Results 1 - 20 of 36
1.
Behav Res Methods ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38366119

ABSTRACT

Early work on selective attention used auditory-based tasks, such as dichotic listening, to shed light on capacity limitations and individual differences in these limitations. Today, there is great interest in individual differences in attentional abilities, but the field has shifted towards visual-modality tasks. Furthermore, most conflict-based tests of attention control lack reliability due to low signal-to-noise ratios and the use of difference scores. Critically, it is unclear to what extent attention control generalizes across sensory modalities, and without reliable auditory-based tests, an answer to this question will remain elusive. To this end, we developed three auditory-based tests of attention control that use an adaptive response deadline (DL) to account for speed-accuracy trade-offs: Auditory Simon DL, Auditory Flanker DL, and Auditory Stroop DL. In a large sample (N = 316), we investigated the psychometric properties of the three auditory conflict tasks, tested whether attention control is better modeled as a unitary factor or modality-specific factors, and estimated the extent to which unique variance in modality-specific factors contributed incrementally to the prediction of dichotic listening and multitasking performance. Our analyses indicated that the auditory conflict tasks have strong psychometric properties and demonstrate convergent validity with visual tests of attention control. Auditory and visual attention control factors were highly correlated (r = .81), even after controlling for perceptual processing speed (r = .75). Modality-specific attention control factors accounted for unique variance in modality-matched criterion measures, but the majority of the explained variance was modality-general. The results suggest an interplay between modality-general attention control and modality-specific processing.
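The adaptive response deadline (DL) procedure described in this abstract can be illustrated with a simple up-down staircase that tightens the deadline after fast correct responses and relaxes it otherwise; the function, step size, and floor below are illustrative assumptions, not the authors' actual algorithm or parameters.

```python
def update_deadline(deadline_ms, correct, rt_ms, step_ms=50, floor_ms=200):
    """One illustrative staircase step for an adaptive response deadline.

    Tighten the deadline when the response beat it and was correct;
    relax it when the response was late or wrong. All parameter values
    are assumptions for illustration, not taken from the cited study.
    """
    if correct and rt_ms <= deadline_ms:
        deadline_ms -= step_ms          # make the task harder
    else:
        deadline_ms += step_ms          # give more time
    return max(deadline_ms, floor_ms)   # never drop below a floor

# A run of mostly fast, correct trials tightens the deadline step by step.
dl = 1000
for rt, ok in [(600, True), (650, True), (990, False), (700, True)]:
    dl = update_deadline(dl, ok, rt)
```

Because the deadline converges toward each participant's own speed, accuracy rather than raw reaction time carries the individual-difference signal, which is one way such designs sidestep speed-accuracy trade-offs.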

2.
Q J Exp Psychol (Hove) ; : 17470218231211905, 2023 Nov 25.
Article in English | MEDLINE | ID: mdl-37877182

ABSTRACT

Despite human accuracy in perceiving time, many factors can modulate the subjective experience of time. For example, it is widely reported that emotion can expand or shrink our perception of time and that temporal intervals are perceived as longer when marked by auditory stimuli than by visual stimuli. In the present study, we aimed to investigate whether the influence of emotion on time perception can be altered by the order in which emotional stimuli are presented and the sensory modality in which they are presented. Participants were asked to complete a time bisection task in which emotional stimuli were presented either acoustically or visually, and either before or after the interval to be estimated. We observed a main effect of modality (longer perceived duration and lower variability in the auditory than in the visual modality) as well as a main effect of emotion (temporal overestimation for negative stimuli compared with neutral stimuli). Importantly, the effects of modality and emotion interacted with the order of presentation of the emotional stimuli. In the visual condition, when emotional stimuli were presented after the temporal intervals, participants overestimated time, but no differences between negative and neutral stimuli were observed when emotional stimuli were presented first. In the auditory condition, no significant effect of emotion on perceived duration was found. The results suggest that negative emotions affect our perception of durations by acting at the decision-making stage rather than at the pacemaker stage. No effect on time perception was observed for emotional auditory stimuli.

3.
Front Hum Neurosci ; 16: 943478, 2022.
Article in English | MEDLINE | ID: mdl-35992945

ABSTRACT

Background: Attention deficit hyperactivity disorder (ADHD) is clinically diagnosed; however, quantitative analysis to statistically assess the symptom severity of children with ADHD via the measurement of head movement is still in progress. Studies focusing on the cues that may influence the attention of children with ADHD in classroom settings, where children spend a considerable amount of time, are relatively scarce. Virtual reality allows real-life simulation of classroom environments and thus provides an opportunity to test a range of theories in a naturalistic and controlled manner. The objective of this study was to investigate the correlation between participants' head movements and their reports of inattention and hyperactivity, and to investigate how their head movements are affected by social cues of different sensory modalities. Methods: Thirty-seven children and adolescents with (n = 20) and without (n = 17) ADHD were recruited for this study. All participants were assessed for diagnoses, clinical symptoms, and self-reported symptoms. A virtual reality-continuous performance test (VR-CPT) was conducted under four conditions: (1) control, (2) no-cue, (3) visual cue, and (4) visual/audio cue. A quantitative comparison of the participants' head movements was conducted in three dimensions (pitch [head nods], yaw [head turns], and roll [lateral head inclinations]) using a head-mounted display (HMD) in a VR classroom environment. Task-irrelevant head movements were analyzed separately, considering the dimension of movement needed to perform the VR-CPT. Results: The magnitude of head movement, especially task-irrelevant head movement, significantly correlated with the current standard of clinical assessment in the ADHD group. Across the four conditions, head movement changed according to the complexity of social cues in both the ADHD and healthy control (HC) groups. Conclusion: Children and adolescents with ADHD showed decreased task-irrelevant movements in the presence of social stimuli toward the intended orientation. As a proof-of-concept study, this study preliminarily identifies the potential of VR as a tool to understand and investigate the classroom behavior of children with ADHD in a controlled, systematic manner.

4.
Appl Ergon ; 105: 103842, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35868052

ABSTRACT

Multimodal interaction (MMI) is being widely implemented, especially in new technologies such as augmented reality (AR) systems, since it is presumed to support a more natural, efficient, and flexible form of interaction. However, limited research has been done to investigate the proper application of MMI in AR. More specifically, the effects of combining different input and output modalities during MMI in AR are still not fully understood. Therefore, this study aims to examine the independent and combined effects of different input and output modalities during a typical AR task. Twenty young adults participated in a controlled experiment in which they were asked to perform a simple identification task using an AR device in different input (speech, gesture, multimodal) and output (VV-VA, VV-NA, NV-VA, NV-NA) conditions. Results showed that there were differences in the influence of input and output modalities on task performance, workload, perceived appropriateness, and user preference. Interaction effects between the input and output conditions on the performance metrics were also evident in this study, suggesting that although multimodal input is generally preferred by users, it should be implemented with caution since its effectiveness is highly influenced by the processing code of the system output. This study, which is the first of its kind, has revealed several new implications regarding the application of MMI in AR systems.

5.
Neurosci Biobehav Rev ; 140: 104797, 2022 09.
Article in English | MEDLINE | ID: mdl-35902045

ABSTRACT

For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we sought to give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review focuses on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we addressed the implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs somewhat between PWBs and sighted persons. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, PWBs and sighted persons seem to be able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.


Subjects
Blindness , Haptic Technology , Brain , Cognition , Humans , Vision, Ocular
6.
Anim Cogn ; 25(6): 1557-1566, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35674910

ABSTRACT

Little research has been conducted on dogs' (Canis familiaris) ability to integrate information obtained through different sensory modalities during object discrimination and recognition tasks. Such a process would indicate the formation of multisensory mental representations. In Experiment 1, we tested the ability of 3 Gifted Word Learner (GWL) dogs, which can rapidly learn the verbal labels of toys, and 10 Typical (T) dogs to discriminate an object recently associated with a reward from distractor objects, under light and dark conditions. While the success rate did not differ between the two groups and conditions, a detailed behavioral analysis showed that all dogs searched for longer and sniffed more in the dark. This suggests that, when possible, dogs relied mostly on vision, and switched to using only other sensory modalities, including olfaction, when searching in the dark. In Experiment 2, we investigated whether, for the GWL dogs (N = 4), hearing the objects' verbal labels activates a memory of a multisensory mental representation. We did so by testing their ability to recognize objects based on their names under dark and light conditions. Their success rate did not differ between the two conditions, whereas the dogs' search behavior did, indicating a flexible use of different sensory modalities. Little is known about the cognitive mechanisms involved in the ability of GWL dogs to recognize labeled objects. These findings supply the first evidence that, for GWL dogs, verbal labels evoke a multisensory mental representation of the objects.


Subjects
Cognition , Recognition (Psychology) , Animals , Dogs , Recognition (Psychology)/physiology , Learning , Smell
7.
Anim Cogn ; 25(5): 1019-1028, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35708854

ABSTRACT

Communication is the process by which an emitter conveys information to one or several receivers to induce a response (behavioral or physiological) in the receiver. Communication plays a major role in various biological functions and may involve signals and cues from different sensory modalities. Traditionally, investigations of animal communication focused on a single sensory modality, yet communication is often multimodal. As these different processes may be quite complex and therefore difficult to disentangle, one approach is to first study each sensory modality separately. With this refined understanding of individual senses, revealing how they interact becomes possible, as the characteristics and properties of each modality can be accounted for, making a multimodal approach feasible. Using this framework, researchers undertook systematic, experimental investigations of mother-pup recognition processes in a colonial pinniped species, the Australian sea lion Neophoca cinerea. The research first assessed the abilities of mothers and pups to identify each other by their voice using playback experiments. Second, they assessed whether visual cues are used by both mothers and pups to distinguish them from conspecifics, and/or whether females discriminate the odor of their filial pup from those of non-filial pups. Finally, to understand whether the information transmitted by different sensory modalities is analyzed synergistically or whether there is a hierarchy among the sensory modalities, experiments were performed involving different sensory cues simultaneously. These findings are discussed with regard to the active space of each sensory cue and the potential enhancements that may arise from assessing information from different modalities.


Subjects
Sea Lions , Animals , Female , Australia , Cues (Psychology) , Mothers , Recognition (Psychology) , Sea Lions/physiology
8.
Vis Comput ; 38(8): 2939-2970, 2022.
Article in English | MEDLINE | ID: mdl-34131356

ABSTRACT

The research progress in multimodal learning has grown rapidly over the last decade in several areas, especially in computer vision. The growing potential of multimodal data streams and deep learning algorithms has contributed to the increasing universality of deep multimodal learning. This involves the development of models capable of processing and analyzing the multimodal information uniformly. Unstructured real-world data can inherently take many forms, also known as modalities, often including visual and textual content. Extracting relevant patterns from this kind of data is still a motivating goal for researchers in deep learning. In this paper, we seek to improve the understanding of key concepts and algorithms of deep multimodal learning for the computer vision community by exploring how to generate deep models that consider the integration and combination of heterogeneous visual cues across sensory modalities. In particular, we summarize six perspectives from the current literature on deep multimodal learning, namely: multimodal data representation, multimodal fusion (i.e., both traditional and deep learning-based schemes), multitask learning, multimodal alignment, multimodal transfer learning, and zero-shot learning. We also survey current multimodal applications and present a collection of benchmark datasets for solving problems in various vision domains. Finally, we highlight the limitations and challenges of deep multimodal learning and provide insights and directions for future research.
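Among the fusion schemes such surveys distinguish, the most basic contrast is feature-level (early) versus decision-level (late) fusion; the sketch below uses made-up feature vectors and scores for illustration and is not code from any surveyed system.

```python
def early_fusion(visual_feats, text_feats):
    """Feature-level (early) fusion: concatenate per-modality feature
    vectors into one joint representation before any model sees them."""
    return visual_feats + text_feats

def late_fusion(visual_scores, text_scores, w_visual=0.5):
    """Decision-level (late) fusion: combine per-modality class scores
    (here, a weighted average) produced by independent unimodal models."""
    return [w_visual * v + (1 - w_visual) * t
            for v, t in zip(visual_scores, text_scores)]

# Toy example: a 2-dim visual vector and a 3-dim text vector,
# then two-class score lists from two hypothetical unimodal models.
joint = early_fusion([0.2, 0.9], [0.4, 0.1, 0.7])   # 5-dim joint vector
scores = late_fusion([0.8, 0.2], [0.6, 0.4])        # approx. [0.7, 0.3]
```

Early fusion lets a downstream model learn cross-modal interactions but requires aligned inputs; late fusion is robust to a missing modality at the cost of modeling those interactions, which is why hybrid schemes are common.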

9.
Arch Sex Behav ; 50(8): 3799-3808, 2021 11.
Article in English | MEDLINE | ID: mdl-34637046

ABSTRACT

Social perception is a multimodal process involving vision and audition as central input sources for human social cognitive processes. However, it remains unclear how profoundly deaf people assess others in the context of mating and social interaction. The current study explored the relative importance of different sensory modalities (vision, smell, and touch) in assessments of opposite- and same-sex strangers. We focused on potential sensory compensation processes in mate selection (i.e., increased importance of the intact senses in forming impressions of an opposite-sex stranger as a potential partner). A total of 74 deaf individuals and 100 normally hearing controls were included in the study sample. We found diminished importance of vision and smell in deaf participants compared with controls for opposite- and same-sex strangers, and increased importance of touch for the assessment of same-sex strangers. The results suggested that deaf people rely less on visual and olfactory cues in mating and social assessments, highlighting a possible role of sign language in shaping interpersonal tactile experience in non-romantic relationships.


Subjects
Deafness , Auditory Perception , Cues (Psychology) , Humans , Smell , Touch
10.
Front Neurogenom ; 2: 625343, 2021.
Article in English | MEDLINE | ID: mdl-38236482

ABSTRACT

The phenomenon of mind wandering (MW), as a family of experiences related to internally directed cognition, heavily influences vigilance evolution. In particular, humans in teleoperation settings monitoring a partially automated fleet before assuming manual control whenever necessary may see their attention drift due to internal sources; as such, MW could play an important role in the emergence of out-of-the-loop (OOTL) situations and associated performance problems. To follow, quantify, and mitigate this phenomenon, electroencephalogram (EEG) systems have already demonstrated robust results. As MW creates an attentional decoupling, both ERPs and brain oscillations are impacted. However, the factors influencing these markers in complex environments are still not fully understood. In this paper, we specifically addressed the possibility of gradual emergence of attentional decoupling and the differences created by the sensory modality used to convey targets. Eighteen participants were asked to (1) supervise an automated drone performing an obstacle avoidance task (visual task) and (2) respond to infrequent beeps as fast as possible (auditory task). We measured event-related potentials and alpha waves through EEG. We also added a 40-Hz amplitude-modulated brown noise to evoke an auditory steady-state response (ASSR). Reported MW episodes were categorized as task-related or task-unrelated. We found that the N1 ERP component elicited by beeps had lower amplitude during task-unrelated MW, whereas the P3 component had higher amplitude during task-related MW, compared with other attentional states. Focusing on parieto-occipital regions, alpha-wave activity was higher during task-unrelated MW than during other attentional states. These results support the decoupling hypothesis for task-unrelated MW but not task-related MW, highlighting possible variations in the "depth" of decoupling across MW episodes. Finally, we found no influence of attentional states on ASSR amplitude, and we discuss possible reasons for this null result. The results underline both the ability of EEG to track and study MW in laboratory tasks mimicking ecological environments and the complex influence of perceptual decoupling on operators' behavior and, in particular, EEG measures.
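The 40-Hz amplitude-modulated brown noise used to evoke the ASSR can be synthesized by integrating white noise and applying a sinusoidal envelope; the sketch below uses assumed parameter values (duration, sampling rate, full modulation depth), not those of the study.

```python
import numpy as np

def am_brown_noise(duration_s=1.0, fs=44100, mod_hz=40.0, seed=0):
    """Brown noise amplitude-modulated at mod_hz, the kind of carrier
    used to evoke an auditory steady-state response (ASSR).
    All parameter values are illustrative, not those of the cited study."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    brown = np.cumsum(rng.standard_normal(n))   # integrate white noise
    brown /= np.max(np.abs(brown))              # normalize to [-1, 1]
    t = np.arange(n) / fs
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))  # 40-Hz AM
    return envelope * brown

signal = am_brown_noise(duration_s=0.5)
```

Because the envelope repeats exactly 40 times per second, the auditory system's response contains a 40-Hz spectral line whose amplitude can be tracked continuously in the EEG, which is what makes the ASSR attractive as a candidate attention probe.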

11.
Behav Sci (Basel) ; 10(7)2020 Jul 20.
Article in English | MEDLINE | ID: mdl-32698450

ABSTRACT

Coping is a survival mechanism of living organisms. It is not merely reactive, but also involves making sense of the environment by rendering sensory information into percepts that have meaning in the context of an organism's cognitions. Music listening, on the other hand, is a complex task that embraces sensory, physiological, behavioral, and cognitive levels of processing. Being both a dispositional process that relies on our evolutionary toolkit for coping with the world and a more elaborated skill for sense-making, it goes beyond primitive action-reaction couplings by the introduction of higher-order intermediary variables between sensory input and effector reactions. Consideration of music-listening from the perspective of coping treats music as a sound environment and listening as a process that involves exploration of this environment as well as interactions with the sounds. Several issues are considered in this regard such as the conception of music as a possible stressor, the role of adaptive listening, the relation between coping and reward, the importance of self-regulation strategies in the selection of music, and the instrumental meaning of music in the sense that it can be used to modify the internal and external environment of the listener.

12.
J Sport Exerc Psychol ; 42(1): 15-25, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-31883505

ABSTRACT

In 2 experiments, the authors investigated the effects of bimodal integration in a sport-specific task. Beach volleyball players were required to make a tactical decision, responding either verbally or via a motor response, after being presented with visual, auditory, or both kinds of stimuli in a beach volleyball scenario. In Experiment 1, players made the correct decision in a game situation more often when visual and auditory information were congruent than in trials in which they experienced only one of the modalities or incongruent information. Decision-making accuracy was greater when motor, rather than verbal, responses were given. Experiment 2 replicated this congruence effect using different stimulus material and showed a decreasing effect of visual stimulation on decision making as a function of shorter visual stimulus durations. In conclusion, this study shows that bimodal integration of congruent visual and auditory information results in more accurate decision making in sport than unimodal information.

13.
Atten Percept Psychophys ; 82(3): 1473-1487, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31741318

ABSTRACT

In this 3-experiment study, the Weber fractions in the 300-ms and 900-ms duration ranges are obtained with nine types of empty intervals resulting from the combinations of three types of signals for marking the beginning and end of the intervals: auditory (A), visual (V), or tactile (T). There were three types of intramodal intervals (AA, TT, and VV) and six types of intermodal intervals (AT, AV, VA, VT, TA, and TV). The second marker is always the same during Experiments 1 (A), 2 (V), and 3 (T). With an uncertainty strategy where the first marker is one of two sensory signals presented randomly from trial to trial, the study provides direct comparisons of the perceived length of the different marker-type intervals. The results reveal that the Weber fraction is nearly constant across the three types of intramodal intervals, but is clearly lower at 900 ms than at 300 ms in intermodal conditions. In several cases, the intramodal intervals are perceived as shorter than intermodal intervals, which is interpreted as an effect of the efficiency in detecting the second marker of an intramodal interval. There were no significant differences between the TA and VA intervals (Experiment 1) or between the AV and TV intervals (Experiment 2), but in Experiment 3, the AT intervals were perceived as longer than the VT intervals. The results are interpreted in terms of the generalized form of Weber's law, using the properties of the signals to explain the additional nontemporal noise observed in the intermodal conditions.
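Under the generalized form of Weber's law invoked here, a constant nontemporal noise term c adds to the scalar timing noise kT, so the measured Weber fraction falls as duration grows; a numeric sketch with illustrative k and c values (assumptions, not estimates from the study):

```python
import math

def weber_fraction(k, c, t_ms):
    """Weber fraction under the generalized form of Weber's law:
    total noise sigma = sqrt((k*t)^2 + c^2), so WF = sigma / t.
    A constant nontemporal noise c (e.g., marker detection) inflates
    the fraction more at short durations. k and c are illustrative."""
    sigma = math.sqrt((k * t_ms) ** 2 + c ** 2)
    return sigma / t_ms

# With no constant noise (c = 0) the fraction is flat across durations;
# with added nontemporal noise it is higher at 300 ms than at 900 ms,
# mirroring the intramodal vs. intermodal pattern described above.
flat_300, flat_900 = weber_fraction(0.05, 0, 300), weber_fraction(0.05, 0, 900)
noisy_300, noisy_900 = weber_fraction(0.05, 20, 300), weber_fraction(0.05, 20, 900)
```

The same algebra explains why intramodal intervals (small c) show a nearly constant fraction while intermodal intervals (larger c, from switching marker modality) show a fraction that drops from 300 ms to 900 ms.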


Subjects
Time Perception , Auditory Perception , Humans , Noise , Photic Stimulation
15.
Appetite ; 142: 104346, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31278955

ABSTRACT

The successful promotion of vegetable consumption by children requires a deep understanding of children's vegetable preferences as well as the factors shaping them throughout childhood. This study analyzed children's vegetable liking in four age ranges (5-6, 7-8, 9-10, and 11-12 years old) in Chile, China, and the United States. Three hundred and eighty-four children completed this study. All participants tasted and rated 14 different vegetables for liking and described the samples using Check-All-That-Apply (CATA) questions. We found significant differences in degree of overall liking among children from the three countries (p < 0.001). Specifically, children in China gave higher overall liking scores than children in the US, who in turn gave higher scores than children in Chile. Child age and gender did not influence children's overall vegetable liking across the three countries. Across all countries and age groups, liking of taste and texture were the best predictors of children's overall liking. The penalty analysis of CATA selections by children showed that the mean impact on liking of the attributes that children used to describe the samples varied among countries, with the descriptors having the least impact on liking for Chinese children.
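The penalty (mean-impact) analysis mentioned above contrasts mean liking when a CATA attribute is checked versus when it is not; a toy sketch with invented ratings, not the study's data:

```python
def cata_mean_impact(ratings, checked):
    """Mean-impact (penalty/lift) analysis for one CATA attribute:
    mean liking when the attribute was checked minus mean liking when
    it was not. A negative value means checking the attribute is
    associated with lower liking. Data below are invented examples."""
    with_attr = [r for r, c in zip(ratings, checked) if c]
    without = [r for r, c in zip(ratings, checked) if not c]
    return sum(with_attr) / len(with_attr) - sum(without) / len(without)

# Toy data: liking scores and whether a hypothetical "bitter"
# descriptor was checked for each child's rating of a sample.
liking = [7, 3, 6, 2, 5]
bitter = [False, True, False, True, False]
impact = cata_mean_impact(liking, bitter)   # 2.5 - 6.0 = -3.5
```

Repeating this per attribute and per country gives the kind of cross-country comparison of descriptor impact the abstract reports.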


Subjects
Cross-Cultural Comparison , Food Preferences/ethnology , Vegetables , Child , Preschool Child , Chile , China , Educational Status , Female , Humans , Male , Sensation , Smell , Taste , United States
16.
Front Psychol ; 10: 1076, 2019.
Article in English | MEDLINE | ID: mdl-31316410

ABSTRACT

While technology is increasingly used in the classroom, we observe at the same time that getting teachers and students to accept it is more difficult than expected. In this work, we focus on multisensory technologies, and we argue that the intersection between current challenges in pedagogical practices and recent scientific evidence opens novel opportunities for these technologies to bring a significant benefit to the learning process. In our view, multisensory technologies are ideal for effectively supporting an embodied and enactive pedagogical approach, exploiting the best-suited sensory modality to teach a concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing this opportunity and fully exploiting it.

17.
Atten Percept Psychophys ; 81(3): 823-845, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30569434

ABSTRACT

Performance in temporal difference threshold and estimation tasks is markedly less accurate for visual than for auditory intervals. In addition, thresholds and estimates are likewise less accurate for empty than for filled intervals. In scalar timing theory, these differences have been explained as alterations in pacemaker rate, which is faster for auditory and filled intervals than for visual and empty intervals. We tested this explanation according to three research aims. First, we replicated the threshold and estimation tasks of Jones, Poliakoff, and Wells (Quarterly Journal of Experimental Psychology, 62, 2171-2186, 2009) and found the well-documented greater precision for auditory than visual intervals, and for filled than for empty intervals. Second, we considered inter-individual differences in these classic effects and found that up to 27% of participants exhibited opposite patterns. Finally, we examined intra-individual differences to investigate (i) whether thresholds and estimates correlate within each stimulus condition and (ii) whether the stimulus condition in which a participant's pacemaker rate was highest was the same in both tasks. Here we found that if pacemaker rate is indeed a driving factor for thresholds and estimates, its effect may be greater for empty intervals, where the two tasks correlate, than for filled intervals, where they do not. In addition, it was more common for participants to perform best in different modalities in each task, though this was not true for ordinal intra-individual differences in the filled-duration illusion. Overall, this research presents several findings inconsistent with the pacemaker rate explanation.


Subjects
Auditory Perception , Differential Threshold , Illusions , Time Perception , Visual Perception , Adult , Humans , Individuality , Middle Aged , Time Factors , Young Adult
18.
Front Psychol ; 9: 989, 2018.
Article in English | MEDLINE | ID: mdl-30038588

ABSTRACT

Even though innate behaviors are essential for assuring quick responses to expected stimuli, experience-dependent behavioral plasticity confers an advantage when unexpected conditions arise. As being rigidly responsive to too many stimuli can be biologically expensive, adapting preferences to time-dependent relevant environmental conditions provides a cheaper and wider behavioral reactivity. According to their specific life habits, animals prioritize different sensory modalities to maximize environment exploitation. Besides, when mediating learning processes, the salience of a stimulus usually plays a relevant role in determining the intensity of an association. Thus, sensory prioritization might reflect a heterogeneity in the cognitive abilities of an individual. Here, we analyze whether, in the kissing bug Rhodnius prolixus, stimuli from different sensory modalities generate different cognitive capacities under an operant aversive paradigm. In a 2-choice walking arena, by registering the spatial distribution of insects over the experimental arena, we first evaluated the innate responses of bugs confronted with mechanical (rough substrate), visual (green light), thermal (32°C heated plate), hygric (humidified substrate), gustatory (sodium chloride), and olfactory (isobutyric acid) stimuli. In further experimental series, bugs were subjected to an aversive operant conditioning by pairing each stimulus with a negative reinforcement. Subsequent tests allowed us to analyze whether the innate behaviors were modulated by such previous aversive experience. In our experimental setup, the mechanical and visual stimuli were neutral, the thermal cue was attractive, and the hygric, gustatory, and olfactory ones were innately aversive. After the aversive conditioning, responses to the mechanical, visual, hygric, and gustatory stimuli were modulated, while responses to the thermal and olfactory stimuli remained rigid. We present evidence that the spatial learning capacities of R. prolixus depend on the sensory modality of the conditioned stimulus, regardless of its innate valence (i.e., neutral, attractive, or aversive). These differences might be explained by the biological relevance of the stimuli and/or by evolutionary aspects of the life traits of this hematophagous insect.

19.
Arch Sex Behav ; 47(3): 597-603, 2018 04.
Article in English | MEDLINE | ID: mdl-29396613

ABSTRACT

Human attractiveness is a potent social variable, and people assess their potential partners based on input from a range of sensory modalities. Among all sensory cues, visual signals are typically considered to be the most important and most salient source of information. However, it remains unclear how people without sight assess others. In the current study, we explored the relative importance of sensory modalities other than vision (smell, touch, and audition) in the assessment of same- and opposite-sex strangers. We specifically focused on possible sensory compensation in mate selection, defined as enhanced importance of modalities other than vision among blind individuals in their choice of potential partners. Data were obtained from a total of 119 participants, of whom 78 were blind people aged between 16 and 65 years (M = 42.4, SD = 12.6; 38 females) and a control sample of 41 sighted people aged between 20 and 64. As hypothesized, we observed a compensatory effect of blindness on auditory perception. Our data indicate that visual impairment increases the importance of audition in different types of social assessments for both sexes and in mate choice for blind men.


Subjects
Blindness/psychology , Sexual Partners/psychology , Verbal Behavior , Adolescent , Adult , Auditory Perception/physiology , Emotions/physiology , Female , Humans , Male , Marriage , Middle Aged , Smell/physiology , Touch/physiology , Young Adult
20.
Front Hum Neurosci ; 12: 513, 2018.
Article in English | MEDLINE | ID: mdl-30631268

ABSTRACT

Background: Recent studies have reported altered efficiency in selective brain regions and functional networks in patients with alcohol use disorder (AUD). Inefficient processing can reflect or arise from the disorganization of information being conveyed from place to place. However, it remains unknown whether efficiency and functional connectivity are altered in the large-scale topological organization of the brain networks of patients with AUD. Methods: Resting-state functional magnetic resonance imaging (rsfMRI) data were collected from 21 right-handed males with AUD and 21 right-handed, age-, gender- and education-matched healthy controls (HCs). Graph theory was used to investigate inter-group differences in the topological parameters (global and nodal) of networks and inter-regional functional connectivity. Correlations between group differences in network properties and clinical variables were also investigated in the AUD group. Results: The brain networks of the AUD group showed decreased global efficiency when compared with the HC group. In addition, increased nodal efficiency was found in the left orbitofrontal cortex (OFC), while reduced nodal efficiency was observed in the right OFC, right fusiform gyrus (FFG), right superior temporal gyrus, right inferior occipital gyrus (IOG), and left insula. Moreover, hypo-connectivity was detected between the right dorsolateral prefrontal cortex (DLPFC) and right superior occipital gyrus (SOG) in the AUD group when compared with the HC group. The nodal efficiency of the left OFC was associated with cognitive performance in the AUD group. Conclusions: Patients with AUD exhibited alterations in brain network efficiency and functional connectivity, particularly in regions linked to multiple sensory modalities. These disrupted topological properties may help provide a more comprehensive understanding of large-scale brain network activity. Furthermore, these data suggest a potential neural mechanism of impaired cognition in individuals with AUD.
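Global efficiency, the whole-network metric reported in this abstract, is the average inverse shortest-path length over all node pairs; a minimal sketch on toy unweighted graphs (the study's networks were derived from rsfMRI connectivity, not constructed like this):

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: the average of
    1 / d(i, j) over all ordered node pairs, where d is the
    shortest-path length found by breadth-first search.
    `adj` maps each node to a list of its neighbors."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# A triangle is maximally efficient (every pair directly connected);
# a 3-node path is less so, since one pair sits two hops apart.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
e_tri, e_path = global_efficiency(triangle), global_efficiency(path)
```

Nodal efficiency, the regional variant the abstract also reports, restricts the same average to the pairs involving one node, so "decreased global efficiency" means information must, on average, traverse longer paths across the whole network.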
